277 research outputs found

    A Needs Assessment to Build International Research Ethics Capacity at Moi University

    International collaborators in biomedical sciences face ethical challenges in the design, review, and conduct of research. Challenges include differences in research ethics capacity, cultural differences in the interpretation and application of ethical principles, and cooperation between ethics review boards at collaborating institutions. Indiana University School of Medicine (Indianapolis, USA) and Moi University Faculty of Health Sciences (Eldoret, Kenya) developed a Memorandum of Understanding (MOU) to establish greater cooperation between their ethics review boards, followed by a joint needs assessment to identify barriers to implementing the MOU. Focus groups and interviews at each institution revealed that, while each side verbalized understanding and respect for the other's culture, there were misunderstandings deeply rooted in each culture that could potentially derail the collaboration. Although the participants at each university agreed on the major principles and issues in research ethics and on the importance attributed to them, a more in-depth evaluation of the responses revealed important differences. Methods to address these misunderstandings are outlined in the recommended Best Practices. Funding: Fogarty International Center at the NIH, Indiana University Division of General Internal Medicine and Geriatrics, Indiana University School of Medicine, Indiana University International Development Fund, Indiana Genomics Initiative, Lilly Endowment, Inc.

    Can computerized clinical decision support systems improve practitioners' diagnostic test ordering behavior? A decision-maker-researcher partnership systematic review

    Background: Underuse and overuse of diagnostic tests have important implications for health outcomes and costs. Decision support technology purports to optimize the use of diagnostic tests in clinical practice. The objective of this review was to assess whether computerized clinical decision support systems (CCDSSs) are effective at improving the ordering of tests for diagnosis, monitoring of disease, or monitoring of treatment. The outcome of interest was the effect on practitioners' diagnostic test-ordering behavior. Methods: We conducted a decision-maker-researcher partnership systematic review. We searched MEDLINE, EMBASE, Ovid's EBM Reviews database, Inspec, and reference lists for eligible articles published up to January 2010. We included randomized controlled trials comparing the use of CCDSSs to usual practice or non-CCDSS controls in clinical care settings. Trials were eligible if at least one component of the CCDSS gave suggestions for ordering or performing a diagnostic procedure. We considered studies 'positive' if they showed a statistically significant improvement in at least 50% of test-ordering outcomes. Results: Thirty-five studies were identified, with significantly higher methodological quality in those published after the year 2000 (p = 0.002). Thirty-three trials reported evaluable data on diagnostic test ordering, and 55% (18/33) of CCDSSs improved testing behavior overall, including 83% (5/6) for diagnosis, 63% (5/8) for treatment monitoring, 35% (6/17) for disease monitoring, and 100% (3/3) for other purposes. Four of the systems explicitly attempted to reduce test-ordering rates, and all succeeded. Factors of particular interest to decision makers, including costs, user satisfaction, and impact on workflow, were rarely investigated or reported. Conclusions: Some CCDSSs can modify practitioner test-ordering behavior. To better inform development and implementation efforts, studies should describe in more detail potentially important factors such as system design, user interface, local context, and implementation strategy, and should evaluate impact on user satisfaction and workflow, costs, and unintended consequences.
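    The review's 'positive study' rule is a simple threshold over the outcomes each trial reports. A minimal sketch in Python, assuming an illustrative per-outcome data structure (not the authors' actual analysis code):

```python
# Sketch of the review's "positive study" criterion: a trial counts as positive
# if at least 50% of its reported test-ordering outcomes showed a statistically
# significant improvement. Field names are illustrative assumptions.

def is_positive_study(outcomes: list) -> bool:
    """outcomes: one dict per reported test-ordering outcome, e.g.
    {"name": "lipid panel ordering", "significant_improvement": True}."""
    if not outcomes:
        return False
    improved = sum(1 for o in outcomes if o["significant_improvement"])
    return improved / len(outcomes) >= 0.5

# Example: 2 of 3 outcomes improved, so the study is counted as positive.
example = [
    {"name": "HbA1c ordering", "significant_improvement": True},
    {"name": "INR monitoring", "significant_improvement": True},
    {"name": "chest X-ray ordering", "significant_improvement": False},
]
print(is_positive_study(example))  # True
```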

    The GuideLine Implementability Appraisal (GLIA): development of an instrument to identify obstacles to guideline implementation

    BACKGROUND: Clinical practice guidelines are not uniformly successful in influencing clinicians' behaviour toward best practices. Implementability refers to a set of characteristics that predict ease of (and obstacles to) guideline implementation. Our objective was to develop and validate a tool for appraisal of the implementability of clinical guidelines. METHODS: Indicators of implementability were identified from the literature and used to create items and dimensions of the GuideLine Implementability Appraisal (GLIA). GLIA consists of 31 items, arranged into 10 dimensions. Questions from 9 of the 10 dimensions are applied individually to each recommendation of the guideline. Decidability and Executability are critical dimensions. The other dimensions are Global, Presentation and Formatting, Measurable Outcomes, Apparent Validity, Flexibility, Effect on Process of Care, Novelty/Innovation, and Computability. We conducted a series of validation activities, including validation of the construct of implementability, expert review of content for clarity, relevance, and comprehensiveness, and assessment of construct validity of the instrument. Finally, GLIA was applied to a draft guideline under development by national professional societies. RESULTS: Evidence of content validity and preliminary support for construct validity were obtained. GLIA proved useful in identifying barriers to implementation in the draft guideline, and the guideline was revised accordingly. CONCLUSION: GLIA may be useful to guideline developers, who can apply the results to remedy defects in their guidelines. Likewise, guideline implementers may use GLIA to select implementable recommendations and to devise implementation strategies that address identified barriers. By aiding the design and operationalization of highly implementable guidelines, GLIA may help to improve health outcomes, but further evaluation will be required to support this potential benefit.
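    As a rough illustration of the instrument's structure described above (dimension names come from the abstract; the item wording and the yes/no/N/A rating scale shown are assumptions, not the published instrument), the nine recommendation-level dimensions could be applied to each recommendation like this:

```python
# Illustrative representation of GLIA's structure: 10 dimensions, of which the
# Global dimension applies to the guideline as a whole and the other 9 are
# applied to each individual recommendation.

GLIA_DIMENSIONS = [
    "Global",                       # guideline-level only
    "Decidability",                 # critical dimension
    "Executability",                # critical dimension
    "Presentation and Formatting",
    "Measurable Outcomes",
    "Apparent Validity",
    "Flexibility",
    "Effect on Process of Care",
    "Novelty/Innovation",
    "Computability",
]

def appraise(recommendations, rate):
    """Apply the 9 recommendation-level dimensions to every recommendation.
    recommendations: list of recommendation texts or identifiers.
    rate(recommendation, dimension) -> 'yes' | 'no' | 'N/A' (appraiser judgment)."""
    return {
        rec: {dim: rate(rec, dim) for dim in GLIA_DIMENSIONS if dim != "Global"}
        for rec in recommendations
    }
```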

    A computer-based medical record system and personal digital assistants to assess and follow patients with respiratory tract infections visiting a rural Kenyan health centre

    BACKGROUND: Clinical research can be facilitated by the use of informatics tools. We used an existing electronic medical record (EMR) system and personal digital assistants (PDAs) to assess the characteristics and outcomes of patients with acute respiratory illnesses (ARIs) visiting a Kenyan rural health center. METHODS: We modified the existing EMR to include details on patients with ARIs. The EMR database was then used to identify patients with ARIs, who were prospectively followed up by a research assistant who rode a bicycle to patients' homes and entered data into a PDA. RESULTS: A total of 2986 clinic visits for 2009 adult patients with respiratory infections were registered in the database between August 2002 and January 2005; 433 patients were selected for outcome assessments. These patients were followed up in the villages and assessed 7 and 30 days later. Complete follow-up data were obtained on 381 patients (88%) and merged with data from the enrollment visit's electronic medical record and subsequent health center visits to assess duration of illness and complications. Symptoms improved at 7 and 30 days, but a substantial minority of patients had persistent symptoms. Eleven percent of patients sought additional care for their respiratory infection. CONCLUSION: EMRs and PDAs are useful tools for performing prospective clinical research in resource-constrained developing countries.
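    A minimal sketch of the kind of record linkage the study describes, joining PDA follow-up observations to EMR enrollment visits by a shared patient identifier (all identifiers and field names below are invented for illustration, not study data):

```python
# Join PDA follow-up observations onto EMR enrollment records by patient ID
# so that duration of illness and complications can be assessed per patient.

emr_enrollment = {
    "KEN-0001": {"enrol_date": "2003-06-02", "diagnosis": "acute bronchitis"},
    "KEN-0002": {"enrol_date": "2003-06-05", "diagnosis": "pharyngitis"},
}
pda_followup = [
    {"patient_id": "KEN-0001", "day": 7, "symptoms_resolved": False},
    {"patient_id": "KEN-0001", "day": 30, "symptoms_resolved": True},
]

merged = {}
for obs in pda_followup:
    record = merged.setdefault(
        obs["patient_id"],
        dict(emr_enrollment[obs["patient_id"]], followups=[]),
    )
    record["followups"].append(
        {"day": obs["day"], "symptoms_resolved": obs["symptoms_resolved"]}
    )

print(merged["KEN-0001"]["followups"])
```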

    Leadership and capacity building in international chiropractic research: introducing the chiropractic academy for research leadership (CARL).

    In an evidence-based health care environment, healthcare professions require a sustainable research culture to remain relevant. At present, however, there is not a mature research culture across the chiropractic profession, largely due to deficiencies in research capacity and leadership, which may be caused by a lack of chiropractic teaching programs in major universities. In response to this challenge, the Chiropractic Academy for Research Leadership (CARL) was created with the aim of developing a global network of successful early-career chiropractic researchers under the mentorship of three senior academics from Australia, Canada, and Denmark. The program centres on an annual week-long residential that rotates continental locations over the first three-year cycle; between residentials, the CARL fellows work on self-initiated research and leadership initiatives. Through a competitive application process, the first cohort was selected and consists of 13 early-career researchers from five professions in seven countries who represent diverse areas of interest of high relevance to chiropractic. The first residential was held in Odense, Denmark; the second is planned for April 2018 in Edmonton, Canada, and the final residential will be held in Sydney, Australia, in 2019.

    Protocol for the 'e-Nudge trial': a randomised controlled trial of electronic feedback to reduce the cardiovascular risk of individuals in general practice [ISRCTN64828380]

    Background: Cardiovascular disease (including coronary heart disease and stroke) is a major cause of death and disability in the United Kingdom, and is to a large extent preventable by lifestyle modification and drug therapy. The recent standardisation of electronic codes for cardiovascular risk variables through the United Kingdom's new General Practice contract provides an opportunity for the application of risk algorithms to identify high-risk individuals. This randomised controlled trial will test the benefits of an automated system of alert messages and practice searches to identify those at highest risk of cardiovascular disease in primary care databases. Design: Patients over 50 years old in practice databases will be randomised to an intervention group that will receive the alert messages and searches, and a control group that will continue to receive usual care. In addition to those at high estimated risk, potentially high-risk patients will be identified who have insufficient data to allow a risk estimate to be made. Further groups identified will be those with possible undiagnosed diabetes, based either on elevated past recorded blood glucose measurements or on an absence of recent blood glucose measurement in those with established cardiovascular disease. Outcome measures: The intervention will be applied for two years, and outcome data will be collected for a further year. The primary outcome measure will be the annual rate of cardiovascular events in the intervention and control arms of the study. Secondary measures include the proportion of patients at high estimated cardiovascular risk, the proportion of patients with missing data for a risk estimate, and the proportion with undefined diabetes status at the end of the trial.
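    The trial's search-and-alert logic amounts to sorting patients over 50 into a handful of groups from routinely coded data. A hedged sketch of that classification, with an assumed risk threshold, an assumed glucose cut-off, and invented field names (the trial's actual algorithm and cut-offs are not specified here):

```python
# Sketch of the e-Nudge-style grouping described above: patients over 50 are
# flagged by estimated cardiovascular risk, by missing data needed for a risk
# estimate, or as possible undiagnosed diabetes. The 20% risk threshold, the
# 7.0 mmol/L glucose cut-off, and all field names are illustrative assumptions.
from typing import Optional

RISK_THRESHOLD = 0.20  # assumed 10-year cardiovascular risk cut-off

def classify_patient(age: int,
                     estimated_cvd_risk: Optional[float],
                     past_glucose_mmol_l: Optional[float],
                     has_cvd: bool) -> list:
    """Return the alert groups a patient falls into (possibly none)."""
    groups = []
    if age <= 50:
        return groups
    if estimated_cvd_risk is None:
        groups.append("potentially high risk: insufficient data for a risk estimate")
    elif estimated_cvd_risk >= RISK_THRESHOLD:
        groups.append("high estimated cardiovascular risk")
    if past_glucose_mmol_l is not None and past_glucose_mmol_l >= 7.0:
        groups.append("possible undiagnosed diabetes: elevated past blood glucose")
    elif past_glucose_mmol_l is None and has_cvd:
        groups.append("possible undiagnosed diabetes: no recent glucose with established CVD")
    return groups

print(classify_patient(age=62, estimated_cvd_risk=None,
                       past_glucose_mmol_l=None, has_cvd=True))
```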

    Designing an automated clinical decision support system to match clinical practice guidelines for opioid therapy for chronic pain

    Background: Opioid prescribing for chronic pain is common and controversial, but recommended clinical practices are followed inconsistently in many clinical settings. Strategies for increasing adherence to clinical practice guideline recommendations are needed to increase effectiveness and reduce negative consequences of opioid prescribing in chronic pain patients. Methods: Here we describe the process and outcomes of a project to operationalize the 2003 VA/DoD Clinical Practice Guideline for Opioid Therapy for Chronic Non-Cancer Pain into a computerized decision support system (DSS) to encourage good opioid prescribing practices during primary care visits. We based the DSS on the existing ATHENA-DSS. We used an iterative process of design, testing, and revision of the DSS by a diverse team, including guideline authors, medical informatics experts, clinical content experts, and end-users, to convert the written clinical practice guideline into a computable algorithm that generates patient-specific recommendations for care based upon existing information in the electronic medical record (EMR), together with a set of clinical tools. Results: The iterative revision process identified numerous and varied problems with the initially designed system despite diverse expert participation in the design process. The process of operationalizing the guideline identified areas in which the guideline was vague, left decisions to clinical judgment, or required clarification of detail to ensure safe clinical implementation. The revisions led to workable solutions to problems, defined the limits of the DSS and its utility in clinical practice, improved integration into clinical workflow, and improved the clarity and accuracy of system recommendations and tools. Conclusions: Use of this iterative process led to the development of a multifunctional DSS that met the approval of the clinical practice guideline authors, content experts, and clinicians involved in testing. The process and experiences described provide a model for the development of other DSSs that translate written guidelines into actionable, real-time clinical recommendations.
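    To make the translation from prose guideline to computable algorithm concrete, here is a hedged sketch of one rule of the kind such a DSS evaluates against the EMR. The rule itself, the one-year interval, and the field names are illustrative assumptions, not the ATHENA-DSS implementation or the guideline's actual text:

```python
# One illustrative patient-specific rule: for a patient on long-term opioid
# therapy, check the EMR for a urine drug screen in the past year and, if none
# is found, surface a recommendation to the clinician.
from datetime import date, timedelta
from typing import Optional

def urine_drug_screen_reminder(on_long_term_opioids: bool,
                               last_screen_date: Optional[date],
                               today: date) -> Optional[str]:
    if not on_long_term_opioids:
        return None
    if last_screen_date is None or (today - last_screen_date) > timedelta(days=365):
        return "Recommend urine drug screening (none recorded in the past year)."
    return None  # rule satisfied; no recommendation generated

print(urine_drug_screen_reminder(True, None, date(2010, 3, 1)))
```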

    Do computerised clinical decision support systems for prescribing change practice? A systematic review of the literature (1990-2007)

    Computerised clinical decision support systems (CDSSs) are used widely to improve quality of care and patient outcomes. This systematic review evaluated the impact of CDSSs targeting specific aspects of prescribing, namely initiating, monitoring, and stopping therapy. We also examined the influence of clinical setting (institutional vs ambulatory care), system- or user-initiation of the CDSS, multifaceted vs stand-alone CDSS interventions, and clinical target on practice changes in line with the intent of the CDSS. We searched Medline, Embase, and PsycINFO for publications from 1990-2007 detailing CDSS prescribing interventions. Pairs of independent reviewers extracted the key features and prescribing outcomes of methodologically adequate studies (experiments and strong quasi-experiments). Fifty-six studies met our inclusion criteria: 38 addressed initiating, 23 monitoring, and three stopping therapy. At the time of initiating therapy, CDSSs appear to be somewhat more effective after, rather than before, drug selection has occurred (7/12 versus 12/26 studies reporting statistically significant improvements in favour of CDSSs on ≥50% of prescribing outcomes reported). CDSSs also appeared to be effective for monitoring therapy, particularly via laboratory test reminders (4/7 studies reporting significant improvements in favour of CDSSs on the majority of prescribing outcomes). None of the studies addressing stopping therapy demonstrated impacts in favour of CDSSs over comparators. The most consistently effective approaches used system-initiated advice to fine-tune existing therapy by making recommendations to improve patient safety, adjust the dose, duration, or form of prescribed drugs, or increase laboratory testing rates for patients on long-term therapy. CDSSs appeared to perform better in institutional compared with ambulatory settings and when decision support was initiated automatically by the system rather than by the user. CDSSs implemented with other strategies, such as education, were no more successful in improving prescribing than stand-alone interventions. Cardiovascular disease was the most studied clinical target, but few studies demonstrated significant improvements on the majority of prescribing outcomes. Our understanding of CDSS impacts on specific aspects of the prescribing process remains relatively limited. Future implementation should build on effective approaches, including the use of system-initiated advice to address safety issues and improve the monitoring of therapy.

    Do Physicians Know When Their Diagnoses Are Correct?

    This study explores the alignment between physicians' confidence in their diagnoses and the “correctness” of these diagnoses, as a function of clinical experience, and whether subjects were prone to over- or underconfidence. Design: Prospective, counterbalanced experimental design. Setting: Laboratory study conducted under controlled conditions at three academic medical centers. Participants: Seventy-two senior medical students, 72 senior medical residents, and 72 faculty internists. Intervention: We created highly detailed, 2- to 4-page synopses of 36 diagnostically challenging medical cases, each with a definitive correct diagnosis. Subjects generated a differential diagnosis for each of 9 assigned cases and indicated their level of confidence in each diagnosis. Measurements and Main Results: A differential was considered “correct” if the clinically true diagnosis was listed in that subject's hypothesis list. To assess confidence, subjects rated the likelihood that they would, at the time they generated the differential, seek assistance in reaching a diagnosis. Subjects' confidence and correctness were “mildly” aligned (κ = .314 for all subjects, .285 for faculty, .227 for residents, and .349 for students). Residents were overconfident in 41% of cases where their confidence and correctness were not aligned, whereas faculty were overconfident in 36% of such cases and students in 25%. Conclusions: Even experienced clinicians may be unaware of the correctness of their diagnoses at the time they make them. Medical decision support systems, and other interventions designed to reduce medical errors, cannot rely exclusively on clinicians' perceptions of their needs for such support.
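    The reported alignment values are agreement coefficients over paired confidence and correctness judgments. A minimal sketch of computing Cohen's kappa for such dichotomized pairs, using made-up data (the study's exact scoring and weighting may differ):

```python
# Cohen's kappa between dichotomized confidence ("would not seek assistance")
# and diagnostic correctness, one pair per case. Example data are fabricated.

def cohens_kappa(pairs):
    """pairs: list of (confident: bool, correct: bool) tuples, one per case."""
    n = len(pairs)
    observed = sum(1 for confident, correct in pairs if confident == correct) / n
    p_confident = sum(1 for confident, _ in pairs if confident) / n
    p_correct = sum(1 for _, correct in pairs if correct) / n
    expected = p_confident * p_correct + (1 - p_confident) * (1 - p_correct)
    return (observed - expected) / (1 - expected)

example = [(True, True), (True, False), (False, False), (False, True),
           (True, True), (False, False), (True, True), (False, False)]
print(round(cohens_kappa(example), 3))  # 0.5 on this fabricated data
```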